Results 1 - 2 of 2
1.
IEEE Access; 10:85571-85581, 2022.
Article in English | Scopus | ID: covidwho-2018604

ABSTRACT

Chest X-ray is one of the most common radiological examinations for screening thoracic diseases. Although existing methods based on convolutional neural networks have achieved remarkable progress in thoracic disease classification from chest X-ray images, the scale variation of pathological abnormalities across different thoracic diseases remains a challenge for chest X-ray image classification. To address this problem, this paper proposes a residual network model based on a pyramidal convolution module and a shuffle attention module (PCSANet). Specifically, the pyramidal convolution extracts more discriminative features of pathological abnormalities than the standard 3×3 convolution, and the shuffle attention enables the PCSANet model to focus on more pathological abnormality features. Extensive experiments on the ChestX-ray14 and COVIDx datasets demonstrate that the PCSANet model achieves superior performance compared with other state-of-the-art methods. An ablation study further shows that pyramidal convolution and shuffle attention effectively improve thoracic disease classification performance. © 2022 IEEE.
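
The abstract only names the two building blocks, so the following is a minimal sketch, assuming a PyTorch-style implementation, of how a residual block could pair a pyramidal convolution (parallel convolutions with growing kernel sizes) with a simplified shuffle-attention step. Module names, kernel sizes, and group counts are illustrative assumptions, not the authors' PCSANet code.

# Illustrative sketch (not the authors' code): pyramidal conv + shuffle attention
# inside a residual block, assuming PyTorch.
import torch
import torch.nn as nn


class PyramidalConv(nn.Module):
    """Parallel convolutions with growing kernel sizes; outputs are concatenated."""

    def __init__(self, in_ch, out_ch, kernels=(3, 5, 7, 9)):
        super().__init__()
        branch_ch = out_ch // len(kernels)
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, branch_ch, k, padding=k // 2, bias=False) for k in kernels]
        )

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)


class ShuffleAttention(nn.Module):
    """Simplified shuffle attention: grouped channel/spatial gating plus a channel shuffle."""

    def __init__(self, channels, groups=8):
        super().__init__()
        self.groups = groups
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.channel_gate = nn.Conv2d(channels, channels, 1, groups=groups)
        self.spatial_gate = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)

    def forward(self, x):
        n, c, h, w = x.shape
        x = x * torch.sigmoid(self.channel_gate(self.pool(x)))  # channel attention
        x = x * torch.sigmoid(self.spatial_gate(x))             # spatial attention
        # channel shuffle so information mixes across groups
        x = x.view(n, self.groups, c // self.groups, h, w)
        return x.transpose(1, 2).reshape(n, c, h, w)


class PCSABlock(nn.Module):
    """Residual block: pyramidal conv -> BN/ReLU -> shuffle attention -> skip connection."""

    def __init__(self, channels):
        super().__init__()
        self.pyconv = PyramidalConv(channels, channels)
        self.bn = nn.BatchNorm2d(channels)
        self.attn = ShuffleAttention(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.attn(self.act(self.bn(self.pyconv(x))))
        return self.act(out + x)


if __name__ == "__main__":
    block = PCSABlock(64)
    print(block(torch.randn(1, 64, 224, 224)).shape)  # torch.Size([1, 64, 224, 224])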

2.
Expert Systems with Applications; 118166, 2022.
Article in English | ScienceDirect | ID: covidwho-1936408

ABSTRACT

Medical image segmentation plays a crucial role in diagnosing and staging diseases. It facilitates image analysis and quantification in many applications, but building appropriate solutions is essential and highly dependent on the characteristics of different datasets and the available computational resources. Most existing approaches segment a specific anatomical region of interest and are of limited use across multiple imaging modalities in clinical settings because of poor generalizability and high computational requirements. To mitigate these issues, we propose a robust and lightweight deep learning network for real-time segmentation of multi-modality medical images, called MISegNet. We incorporate a discrete wavelet transform (DWT) of the input to extract salient features in the frequency domain; this mechanism enlarges the neurons' receptive field within the network. We propose a self-attention-based global context-aware (SGCA) module with varying dilation rates to enlarge the field of view and weigh the importance of each scale, which enhances the network's ability to discriminate features. We build a residual shuffle attention (RSA) mechanism to improve the feature representation of the proposed model and formulate a new boundary-aware loss function, called Farid End Point Error (FEPE), that correctly segments regions with ambiguous boundaries by exploiting edge detection. We confirm the versatility of the proposed model through experiments against eleven state-of-the-art segmentation methods on four datasets of different organs, including two publicly available datasets (ISBI2017 and COVID-19 CT) and two private datasets (ovary and liver ultrasound images). Experimental results show that MISegNet, with 1.5M parameters, outperforms the state-of-the-art methods by 1.5%–7% in Dice coefficient score, with a corresponding 23× decrease in the number of parameters and multiply-accumulate operations compared with U-Net.
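
The SGCA module is described only at a high level (dilated convolutions at several rates plus learned scale importance), so the following is a minimal PyTorch sketch of that idea, not the authors' MISegNet code. The class name, dilation rates, and the softmax-over-scales weighting are illustrative assumptions.

# Illustrative sketch (not MISegNet): dilated convolutions at several rates
# combined with a learned attention over the scales, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalContextModule(nn.Module):
    """Parallel dilated convs enlarge the field of view; a softmax weight over
    the scales decides how much each scale contributes to the output."""

    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.scales = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False) for d in dilations]
        )
        # per-scale importance predicted from globally pooled features
        self.scale_logits = nn.Linear(channels, len(dilations))

    def forward(self, x):
        feats = [conv(x) for conv in self.scales]               # list of (N, C, H, W)
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)         # (N, C)
        weights = torch.softmax(self.scale_logits(pooled), 1)   # (N, S)
        stacked = torch.stack(feats, dim=1)                     # (N, S, C, H, W)
        out = (weights[:, :, None, None, None] * stacked).sum(dim=1)
        return out + x                                          # residual connection


if __name__ == "__main__":
    m = GlobalContextModule(32)
    print(m(torch.randn(2, 32, 64, 64)).shape)  # torch.Size([2, 32, 64, 64])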
